1
Driving Physician Engagement and Patient Outcomes with AI
Session #210, Thursday, February 14, 2019, 10:00-11:00am
Dipti Patel-Misra, PhD, MBA, Chief Data and Analytics Officer, Vituity
Joshua H Tamayo-Sarver, MD, PhD, CPHIMS, VP of Informatics, Vituity
2
Dipti Patel-Misra, PhD, MBA
Joshua H Tamayo-Sarver, MD, PhD
Have no real or apparent conflicts of interest to report.
Conflict of Interest
3
Process for building tools
Determine how to build it
Test your assumptions
Build an AI model
Ensure continuous learning of the AI model
Agenda
4
1. List the attributes of a sound AI/Machine Learning model
2. Analyze the process to engage physicians in using predictive
tools
3. State the advantages of stratifying risk in the acute coronary
syndrome (ACS) example
Learning Objectives
5
Do stakeholders recognize that
they have the problem?
Do physicians want to
make better
admit/discharge decisions
for chest pain patients?
How do data scientists/
engineers make tools that
are needed by physicians?
Process for building tools
6
If there were a “product” that
solved the problem, would
stakeholders use it?
Would physicians use a
tool that helps them make
better admit/discharge
decisions for chest pain
patients?
Would data scientists/
engineers build different
products if they knew what
the physicians want?
Process for building tools
7
Can we build a sustainable
business around the product?
Could we implement the
tool across sites to reach
our target users given
realistic revenue and
expense projections?
Could we build tools that are
intuitive to use, easy to scale,
and deliver measurable
returns?
Process for building tools
8
Can we build a “product” to
solve the problem?
Can we build a tool
that accurately predicts
the likelihood of an
adverse cardiac event
in a way that is usable
and scalable?
Can we calculate the
ROI for this tool?
Process for building tools
9
Physicians make many decisions
In an emergency department, the
decision is often made at the
bedside in the first minute or so
of the encounter
The remainder of the encounter is
spent validating that decision
Physicians want to make better
decisions
10
Physicians believe they are right
Serve as their own gold standard
The patient has pneumonia because I diagnosed
“pneumonia”
Physicians want to make better
decisions
11
Information that justifies what I want
to do
I believe the patient is safe to
send home, I want information
that confirms my decision.
Information that warns me I am
about to make a mistake
I believe the patient is safe to
send home, I want to be
notified if I am wrong.
If I believe the patient should be
admitted, then a predictive model
will not change my mind.
As a Physician, what information do I
want?
12
Process for building tools
Determine how to build it
Test your assumptions
Build an AI model
Ensure continuous learning of the AI model
Agenda
13
Data scientists and engineers build
many tools
They have a wealth of data
and numerous opportunities to
build AI/ ML models
The key is to provide clinicians
with the right model at the right
time in the right way so that
tools are utilized and drive
value
Data scientists/engineers want
tools that are used and drive change
14
Align with the desired
outcome
Understand the physician
workflow
Build the required criteria
As a data scientist/engineer where do
I start?
15
Information is desired in the context of the clinician wanting to send
the patient home
Model needs to have high sensitivity to detect risk of adverse
events
“Safe to discharge” per the model must be truly safe
Criteria
Must have miss rate of less than 2%
Must be better than available tools with less effort
HEART score, TIMI score
Workflow criteria
Must be “pushed” to the clinician before the decision is made,
i.e., at the first point of contact
The prediction must surface before the clinician evaluates the
patient
Assumptions of successful AI
prediction
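The miss-rate criterion above can be made concrete: on a validation set, choose the discharge threshold so that fewer than 2% of true adverse events fall below it. A minimal sketch, assuming Python with NumPy; the scores here are entirely synthetic placeholders, not Vituity's model:

```python
import numpy as np

# Synthetic validation data: y_true = 1 marks an adverse cardiac event,
# y_score is a hypothetical predicted risk. Real data would replace both.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0] * 50)
rng = np.random.default_rng(0)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.15, y_true.size), 0, 1)

def lowest_safe_threshold(y_true, y_score, max_miss_rate=0.02):
    """Highest discharge threshold whose miss rate (true events the
    model would label safe to send home) stays below max_miss_rate."""
    events = y_score[y_true == 1]
    best = 0.0
    for t in np.sort(np.unique(y_score)):
        miss_rate = np.mean(events < t)  # events scored below t go home
        if miss_rate < max_miss_rate:
            best = t
    return best

t = lowest_safe_threshold(y_true, y_score)
print(f"discharge patients with predicted risk below {t:.3f}")
```

Sweeping thresholds this way keeps the operating point anchored to the <2% miss-rate requirement rather than to overall accuracy.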
16
Why is it crucial that predictive tools are used regularly?
A. They continually learn and improve
B. They grow stale
C. CMS mandates it
D. Stakeholders will forget how to use them
Question 1
17
Process for building tools
Determine how to build it
Test your assumptions
Build an AI model
Ensure continuous learning of the AI model
Agenda
18
The minimum viable product (MVP)
Meet the physician requirements
Deliver value
Get feedback for future product release
[Diagram: MVP cycle — Concept → Product → Data → Code → Build → Deploy → Feedback → Enhance]
19
Create a conceptual tool design
Test with group of practicing physicians
Meets assumptions
<2% miss rate
Superior performance to existing tools
Available prior to seeing the patient
Delivered in a specific format (likelihood with confidence interval)
“Given this clinical scenario, would having this information [show
the tool] change your decision, make you more confident of your
decision, or not have much value?”
“How much would you pay for this information?”
The Cardiac minimum viable product
20
Learned that the information was not
actionable for the clinician
Needed two elements from the predicted
risk score
Composite risk of any cardiac event
Acute Myocardial Infarction
Coronary Artery Bypass Graft
surgery
Percutaneous Coronary
Intervention (Cardiac cath)
All-cause mortality
Confidence of predicted risk score
The physician feedback
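One way to deliver both requested elements — a composite risk plus a confidence around it — is to use an ensemble and report the spread of per-member predictions as a heuristic interval. A sketch only, assuming scikit-learn's random forest and synthetic data; this is not a calibrated interval and not the actual Vituity approach:

```python
# Composite risk of "any cardiac event" with a heuristic confidence band,
# taken from the spread of per-tree probability estimates. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))                                  # placeholder features
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # any-event label

forest = RandomForestClassifier(n_estimators=200, random_state=3).fit(X, y)

x_patient = X[:1]  # one encounter, as a 2D row
per_tree = np.array([t.predict_proba(x_patient)[0, 1]
                     for t in forest.estimators_])
lo, hi = np.percentile(per_tree, [2.5, 97.5])
print(f"composite risk {per_tree.mean():.2f} (95% band {lo:.2f}-{hi:.2f})")
```

A wide band signals the model is unsure for this patient, which is exactly the second element the physicians asked for.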
21
What’s the best approach to engage physicians in using
AI/predictive models?
A. Ask them to input data
B. Impose the tools upon them
C. Ignore their apprehensions
D. Help them solve a problem
Question 2
22
Process for building tools
Determine how to build it
Test your assumptions
Build an AI model
Ensure continuous learning of the AI model
Agenda
23
Identify the product - outcomes, timing, and insights needed by
the clinician
Identify the required data
Prepare the required data sets
Build and train models
Validate model accuracy
Test model
Tune model parameters
Build an AI model
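The train/validate/test steps above can be sketched end to end. A minimal illustration, assuming scikit-learn; the features, labels, and algorithm here are placeholders, since the deck does not specify the production model:

```python
# Sketch of the build/train/validate loop with placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                       # stand-in for vitals/labs/history
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=1)

model = LogisticRegression(class_weight="balanced")  # weight toward sensitivity
model.fit(X_tr, y_tr)

sens = recall_score(y_te, model.predict(X_te))       # sensitivity on held-out test set
print(f"test-set sensitivity: {sens:.2f}")
```

Validating on a held-out split, then tuning parameters against the sensitivity requirement, follows the validate/test/tune steps listed above.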
24
Process for building tools
Determine how to build it
Test your assumptions
Build an AI model
Ensure continuous learning of the AI model
Agenda
25
The point of the tool is to change outcomes
Outcomes are changed when the tool successfully changes behavior
The model was derived from data collected while everyone was
behaving in a certain manner
The successful tool changes the behaviors on which the model was
built
Thus, successful tools make themselves inaccurate over time
The tools can destroy their own
accuracy
26
Model should be built such that it can learn continuously and
quickly adapt once deployed
Gather real-time feedback on the model once deployed
Incorporate these data elements to incrementally improve the
model
Creating a learning AI system
27
Start with the “so what” questions
Do stakeholders recognize that they have the problem?
If there were a “product” that solved the problem, would
stakeholders use it?
Can we build a sustainable business around the product?
Can we build a “product” to solve the problem?
The AI should be built around sound principles to solve the
problem as defined by the “so what” questions
Iterate the learning to ensure accuracy as the target starts to
move
When the AI-powered product meets the “so what” questions
defined by the clinicians, engagement and adoption are seamless
Conclusion
28
True or False: An AI/Machine Learning model must be able to
explain its conclusions.
Question 3
29
Dipti Patel-Misra, PhD, MBA
Dipti.patel@vituity.com
Josh Tamayo-Sarver, MD, PhD
Joshua.tamayosarver@vituity.com
Please complete online session evaluation
Questions